A preliminary empirical study to compare MPI and OpenMP productivity

Authors

  • Lorin Hochstein
  • Victor R. Basili
Abstract

Context: The rise of multicore is bringing shared-memory parallelism to the masses. The community is struggling to identify which parallel models are most productive. Objective: Measure the effect of the MPI and OpenMP models on programmer productivity. Design: One group of programmers solved the sharks-and-fishes problem using MPI and a second group solved the same problem using OpenMP; each programmer then switched models and solved the problem again. The participants were graduate students in an HPC course. Measures: Development effort (hours), program correctness (grades), and program performance (speedup versus a serial implementation). Results: Mean OpenMP development time was 9.6 hours less than MPI (95% CI, 0.37–19 hours), a 43% reduction. No statistically significant difference was observed in assignment grades. MPI performance was better than OpenMP performance for 4 of the 5 students who submitted correct implementations for both models. Conclusions: OpenMP solutions to this problem required less effort than MPI, but the study had insufficient power to measure the effect on correctness. The performance data were insufficient to draw strong conclusions but suggest that unoptimized MPI programs perform better than unoptimized OpenMP programs, even with a similar parallelization strategy. Further studies are needed to examine different programming problems, models, and levels of programmer experience.
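To make the two models concrete, here is a minimal sketch of the same trivial computation (summing an array) written in each model. It is not the sharks-and-fishes assignment from the study; the array size, file names, and compile commands are illustrative assumptions.

    /* OpenMP version (shared memory): compile with e.g. cc -fopenmp sum_omp.c */
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;
        for (int i = 0; i < N; i++) a[i] = 1.0;

        /* A single pragma parallelizes the loop over the threads of one process. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        return 0;
    }

In the MPI version of the same reduction, the programmer must partition the data across processes and combine the partial results with an explicit communication call; this explicit decomposition and communication is the kind of additional work that the effort measure above would capture.

    /* MPI version (distributed memory): compile with e.g. mpicc sum_mpi.c */
    #include <stdio.h>
    #include <mpi.h>

    #define N 1000000

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank owns a contiguous block of the index space. */
        int chunk = N / size;
        int lo = rank * chunk;
        int hi = (rank == size - 1) ? N : lo + chunk;

        double local = 0.0;
        for (int i = lo; i < hi; i++)
            local += 1.0;   /* stands in for a[i], which is 1.0 in this toy example */

        double sum = 0.0;
        MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("sum = %f\n", sum);
        MPI_Finalize();
        return 0;
    }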

Similar articles

Parallel computing using MPI and OpenMP on self-configured platform, UMZHPC.

Parallel computing is a topic of interest for a broad scientific community, since it speeds up many time-consuming algorithms in different application domains. In this paper, we introduce a novel platform for parallel computing using the MPI and OpenMP programming models on a set of networked PCs. UMZHPC is a free Linux-based parallel computing infrastructure that has been developed to cr...

Entropy based Malmquist Productivity Index in Data Envelopment Analysis

The Malmquist Productivity Index (MPI) is one of the most widely used indices for estimating the productivity change of a Decision Making Unit (DMU) over time. Virtually any empirical study that uses MPI reports the average of the productivity indices it estimates to represent the overall tendency in productivity changes. In such a case, the productivity indices of a DMU are considered wi...

Military hospitals efficiency evaluation: Application of Malmquist Productivity Index-Data Envelopment Analysis

Purpose: The main purpose of this study is to examine the development of efficiency and productivity in the military hospital sector of Tehran province by applying a nonparametric method. Design/methodology/approach: The study applied the nonparametric method to assess the efficiency of military hospital services in Tehran, Iran, over the period 2013-2016. Utilizing non-parametric me...

Automatic Scaling of OpenMP Beyond Shared Memory

OpenMP is an explicit parallel programming model that offers reasonable productivity. Its memory model assumes a shared address space, so the direct translation performed by common OpenMP compilers requires an underlying shared-memory architecture. Many lab machines comprise tens of processors built from commodity components and thus have distributed address spaces. Despite many efforts ...

Implementation and performance evaluation of SPAM particle code with OpenMP-MPI hybrid programming

In this paper, we implement a SPAM (Smooth Particle Applied Mechanics) code in both pure MPI and hybrid MPI-OpenMP fashion, then compare and analyze their performance on an SMP PC cluster. Our SPAM code is written to handle any mapping of spatial cells onto parallel MPI processes, to achieve good load balancing even with a relatively high communication cost. First we implement a paralleli...
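As a rough illustration of what a hybrid MPI-OpenMP structure looks like (a minimal sketch, not the SPAM code described above; the loop body and sizes are placeholders), each MPI rank spawns OpenMP threads for its local work and only the master thread performs MPI communication:

    /* Hybrid sketch: compile with e.g. mpicc -fopenmp hybrid.c */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int provided;
        /* Request FUNNELED support: only the thread that called
           MPI_Init_thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* OpenMP parallelism inside each MPI process, e.g. over the particles
           in the spatial cells owned by this rank (placeholder loop here).   */
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0;

        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0) printf("global = %f\n", global);

        MPI_Finalize();
        return 0;
    }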


Journal:

Volume   Issue

Pages  -

Publication date: 2011